
    PACT: An initiative to introduce computational thinking to second-level education in Ireland

    PACT (Programming ∧ Algorithms ⇒ Computational Thinking) is a partnership between researchers in the Department of Computer Science at Maynooth University and teachers at selected post-primary schools around Ireland. Starting in September 2013, seven Irish secondary schools took part in a pilot study, delivering material prepared by the PACT team to Transition Year students. Three areas of Computer Science were identified as key to delivering a successful course in computational thinking: programming, algorithms, and computability. An overview of the PACT module is provided, along with an analysis of the feedback obtained from the students and teachers involved in delivering the initial pilot.

    Applications and challenges of marker-assisted selection in the Western Australian Wheat Breeding Program

    Interactions with the extracellular matrix (ECM) through integrin adhesion receptors provide cancer cells with physical and chemical cues that act together with growth factors to support survival and proliferation. Antagonists that target integrins containing the β1 subunit inhibit tumor growth and sensitize cells to irradiation or cytotoxic chemotherapy in preclinical breast cancer models and are under clinical investigation. We found that the loss of β1 integrins attenuated breast tumor growth but markedly enhanced tumor cell dissemination to the lungs. When cultured in three-dimensional ECM scaffolds, antibodies that blocked β1 integrin function or knockdown of β1 switched the migratory behavior of human and mouse E-cadherin-positive triple-negative breast cancer (TNBC) cells from collective to single-cell movement. This switch involved activation of the transforming growth factor-β (TGFβ) signaling network, which led to a shift in the balance between miR-200 microRNAs and the transcription factor zinc finger E-box-binding homeobox 2 (ZEB2), resulting in suppressed transcription of the gene encoding E-cadherin. Reducing the abundance of a TGFβ receptor, restoring the ZEB/miR-200 balance, or increasing the abundance of E-cadherin reestablished cohesion in β1 integrin-deficient cells and reduced dissemination to the lungs without affecting growth of the primary tumor. These findings reveal that β1 integrins control a signaling network that promotes an epithelial phenotype and suppresses dissemination, and indicate that targeting β1 integrins may have undesirable effects in TNBC.

    Holistic Cube Analysis: A Query Framework for Data Insights

    We present Holistic Cube Analysis (HoCA), a framework that augments the capabilities of relational queries for data insights. We first define AbstractCube, a data type defined as a function from RegionFeatures space to relational tables. AbstractCube provides a logical form of data for HoCA operators and their compositions to operate on to analyze the data. This function-as-data modeling allows us to simultaneously capture a space of non-uniform tables on the co-domain of the function, and region space structure on the domain of the function. We describe two HoCA operators, cube crawling and cube join, which are cube-to-cube transformations (i.e., higher-order functions). Cube crawling explores a region subspace and outputs a cube mapping regions to signal vectors. Cube join, in turn, allows users to meld information in different cubes, which is critical for composition. The cube crawling interface introduces two novel features: (1) Region Analysis Models (RAMs), which allow one to program and organize analysis on a set of data features into a module, and (2) Multi-Model Crawling, which allows one to apply multiple models, potentially on different feature sets, during crawling. These two features, together with cube join and a rich RAM library, allow us to construct succinct HoCA programs to capture a wide variety of data-insight problems in system monitoring, experimentation analysis, and business intelligence. HoCA poses a rich algorithmic design space, such as optimizing crawling performance by leveraging region space structure, optimizing cube join performance, and physical designs of cubes. We describe several cube crawling implementations leveraging different foundations (an in-house relational query engine, and Apache Beam), and evaluate their performance characteristics. Finally, we discuss avenues for extending the framework, such as devising more useful HoCA operators.
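
    The function-as-data idea is the crux of HoCA, so a small sketch may help. The names below (AbstractCube, crawl, cube_join, latency_ram) are hypothetical illustrations of the concepts described in the abstract, not the paper's actual API.

    ```python
    # A minimal sketch of HoCA's function-as-data modeling; all names here
    # are assumptions for illustration, not the paper's real interface.
    from typing import Callable, Dict, List, Tuple

    # A "region" is a point in RegionFeatures space, here a tuple of feature values.
    Region = Tuple[str, ...]
    # A relational table, here simply a list of row dicts.
    Table = List[Dict[str, float]]
    # AbstractCube: a function from regions to (possibly non-uniform) tables.
    AbstractCube = Callable[[Region], Table]

    def crawl(cube: AbstractCube,
              regions: List[Region],
              model: Callable[[Table], List[float]]) -> Dict[Region, List[float]]:
        """Cube crawling: explore a region subspace and map each region to a
        signal vector produced by a Region Analysis Model (RAM)."""
        return {r: model(cube(r)) for r in regions}

    def cube_join(a: Dict[Region, List[float]],
                  b: Dict[Region, List[float]]) -> Dict[Region, List[float]]:
        """Cube join: meld two crawled cubes on their shared regions by
        concatenating their signal vectors."""
        return {r: a[r] + b[r] for r in a.keys() & b.keys()}

    # Example RAM: summarise a table's 'latency' column as [mean, max].
    def latency_ram(table: Table) -> List[float]:
        values = [row["latency"] for row in table] or [0.0]
        return [sum(values) / len(values), max(values)]
    ```

    Because crawling and joining both return cubes (region-to-vector maps), they compose: one can crawl two subspaces with different RAMs and join the results, which is the higher-order, cube-to-cube behavior the abstract describes.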

    Standardizing data reporting in the research community to enhance the utility of open data for SARS-CoV-2 wastewater surveillance

    SARS-CoV-2 RNA detection in wastewater is being rapidly developed and adopted as a public health monitoring tool worldwide. With wastewater surveillance programs being implemented across many different scales and by many different stakeholders, it is critical that the data collected and shared are accompanied by an appropriate minimal amount of meta-information to enable meaningful interpretation, use of this new information source, and intercomparison across datasets. While some databases are being developed for specific surveillance programs locally, regionally, nationally, and internationally, common globally adopted data standards have not yet been established within the research community. Establishing such standards will require national and international consensus on what meta-information should accompany SARS-CoV-2 wastewater measurements. To establish a recommendation on the minimum information to accompany reporting of SARS-CoV-2 occurrence in wastewater for the research community, the United States National Science Foundation (NSF) Research Coordination Network on Wastewater Surveillance for SARS-CoV-2 hosted a workshop in February 2021 with participants from academia, government agencies, private companies, wastewater utilities, public health laboratories, and research institutes. This report presents the two primary outcomes of the workshop: (i) a recommendation on the set of minimum meta-information that is needed to confidently interpret wastewater SARS-CoV-2 data, and (ii) insights from workshop discussions on how to improve standardization of data reporting.
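
    To make the notion of minimum meta-information concrete, here is a sketch of what a machine-readable wastewater record might look like. The field names are illustrative assumptions, not the workshop's actual recommended list, which is given in the report itself.

    ```python
    # A hypothetical minimal record schema for a wastewater SARS-CoV-2
    # measurement; the fields are assumptions for illustration only.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class WastewaterRecord:
        sample_date: date            # when the sample was collected
        site_id: str                 # sampling location identifier
        sample_type: str             # e.g. "grab" or "24h-composite"
        target_gene: str             # assay target, e.g. "N1"
        concentration: float         # measured value
        units: str                   # e.g. "gene copies/L"
        recovery_control: str        # process control used, if any
        quantification_method: str   # e.g. "RT-qPCR", "RT-ddPCR"

    record = WastewaterRecord(
        sample_date=date(2021, 2, 15),
        site_id="WWTP-01",
        sample_type="24h-composite",
        target_gene="N1",
        concentration=1.2e4,
        units="gene copies/L",
        recovery_control="BCoV",
        quantification_method="RT-qPCR",
    )
    ```

    The point of such a schema is intercomparison: two programs reporting the same concentration number are only comparable if the sample type, target gene, units, and quantification method travel with the value.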

    Could Silver Bow Creek tributaries serve as source populations to recolonize Silver Bow Creek?

    A remediation goal in Silver Bow Creek is to restore trout populations, which provide recreational angling opportunity and are an indication of ecosystem recovery. Remediation has been in progress since 1998. All of the mainstem fish populations were lost as the creek received more than a century of copper mine tailings. Several tributary fish populations survived, including genetically unaltered westslope cutthroat trout Oncorhynchus clarki lewisi in German Gulch. In 2009, we tagged 977 fish (259 westslope cutthroat trout, 664 brook trout Salvelinus fontinalis, 54 longnose sucker Catostomus catostomus) with Passive Integrated Transponder (PIT) tags in three tributaries to Silver Bow Creek: German Gulch, Brown’s Gulch, and Blacktail Creek. Stationary antennas at tributary confluences continuously monitored the timing and direction of fish movements between August 7 and November 15, 2009. In German Gulch, 7.4% (n=256) of tagged westslope cutthroat trout and 8.0% (n=158) of tagged brook trout moved into Silver Bow Creek. Of the brook trout tagged in Brown’s Gulch, 23.1% (n=169) moved into Silver Bow Creek. Once they moved into the mainstem, nearly all fish remained in Silver Bow Creek throughout the study. In Blacktail Creek, only 1.1% (n=350) of the fish tagged were detected in Silver Bow Creek. Of the 54 longnose sucker tagged, none moved into Silver Bow Creek. German Gulch and Brown’s Gulch trout populations may provide an important population subsidy to Silver Bow Creek’s nascent trout population. Blacktail Creek’s relatively large population of brook trout may be functionally disconnected from Silver Bow Creek, and may not provide the same benefit to upper Silver Bow Creek.
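
    Reading each percentage as movers over the tagged count in that group (an interpretation of the n values, not arithmetic stated in the abstract), the implied fish counts can be recovered in a few lines:

    ```python
    # Back-calculating approximate mover counts from the reported
    # percentages and group sizes; the grouping is an assumption.
    tagged = {"WCT, German Gulch": 256, "Brook trout, German Gulch": 158,
              "Brook trout, Brown's Gulch": 169, "All fish, Blacktail Creek": 350}
    moved_pct = {"WCT, German Gulch": 7.4, "Brook trout, German Gulch": 8.0,
                 "Brook trout, Brown's Gulch": 23.1, "All fish, Blacktail Creek": 1.1}

    for group, n in tagged.items():
        movers = round(n * moved_pct[group] / 100)
        print(f"{group}: ~{movers} of {n} tagged fish entered Silver Bow Creek")
    # e.g. ~19 of 256 westslope cutthroat trout from German Gulch
    ```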

    Water quality and physical habitat effects on trout distribution and abundance in Silver Bow Creek

    Uncontrolled disposal of mining wastes in the Butte mining districts resulted in the extirpation of fishes from Silver Bow Creek throughout the 20th century. Superfund remediation has been ongoing in the watershed since 1998 and is near completion. Overall, metal concentrations in Silver Bow Creek are reduced from pre-remediation levels. However, the stream is influenced by municipal sewage, and during midsummer, hypoxia has been observed at night downstream from the wastewater discharge. Despite these water quality problems, six fish species, including three sensitive salmonids, now inhabit Silver Bow Creek. To evaluate the success of remediation in reestablishing salmonid populations, spatially continuous fish abundance and habitat surveys were conducted in coordination with synoptic water quality measurements throughout 34 stream km during the summer of 2011. An extensive stream portion (≈6 km) had low dissolved oxygen (DO), and minimum DO concentrations were …

    Sentence-Level Event Classification in Unstructured Texts

    The ability to correctly classify sentences that describe events is an important task for many natural language applications such as Question Answering (QA) and Text Summarisation. In this paper, we treat event detection as a sentence-level text classification problem. We compare the performance of two approaches to this task: a Support Vector Machine (SVM) classifier and a Language Modeling (LM) approach. We also investigate a rule-based method that uses hand-crafted lists of ‘trigger’ terms derived from WordNet. We use two datasets in our experiments and test each approach on six different event types: Die, Attack, Injure, Meet, Transport, and Charge-Indict. Our experimental results indicate that although the trained SVM classifier consistently outperforms the language modeling approach, our rule-based system marginally outperforms the trained SVM classifier on three of our six event types. We also observe that overall performance is greatly affected by the type of corpus used to train the algorithms. Specifically, we have found that a homogeneous training corpus containing many instances of a specific event type (e.g., Die events in the recent Iraqi war) produces a poorer-performing classifier than one trained on a heterogeneous dataset containing more diverse instances of the event (e.g., Die events in many different settings, such as traffic accidents and natural disasters). Our heterogeneous dataset is provided by the ACE (Automatic Content Extraction) initiative, while our novel homogeneous dataset consists of news articles and annotated Die events from the Iraq Body Count (IBC) database. Overall, our results show that the techniques presented here are effective solutions to the event classification task described in this paper, with F1 scores of over 90%.
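
    As a rough illustration of the SVM baseline described above, the sketch below builds a sentence-level event classifier with scikit-learn. The toolkit, features, and toy sentences are assumptions; the paper does not specify its implementation, and its real experiments used the ACE and Iraq Body Count datasets.

    ```python
    # A minimal sketch of an SVM sentence-level event classifier; the
    # pipeline and training data here are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy examples: one sentence per event type being classified.
    sentences = [
        "Three soldiers died in the ambush.",
        "The leaders met in Geneva on Tuesday.",
        "Rebels attacked the convoy at dawn.",
        "Two civilians were injured in the blast.",
    ]
    labels = ["Die", "Meet", "Attack", "Injure"]

    # TF-IDF word and bigram features feeding a linear SVM.
    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    classifier.fit(sentences, labels)
    print(classifier.predict(["A bomb attack killed four people."]))
    ```

    The abstract's corpus finding maps directly onto this setup: swapping the toy training list for a homogeneous set of Die sentences from one conflict versus a heterogeneous set drawn from many settings is exactly the variable the authors found drives classifier quality.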